AIs could be hacked with undetectable backdoors to make bad decisions
Artificial intelligence is increasingly used in business. But because of the way such systems are built, they could in theory contain undetectable features that bypass their normal decision-making process, leaving them open to exploitation by malicious third parties. For instance, an AI model tasked with shortlisting CVs for a job vacancy could be made to covertly prioritise any that include a deliberately obscure phrase.
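The mechanism the article describes can be sketched as a toy example. Everything below is hypothetical and illustrative only: the trigger phrase, the scoring function, and the CV texts are invented for demonstration, and a real backdoor would be hidden inside the model's learned weights rather than written as an explicit `if` statement.

```python
# Toy illustration (hypothetical, not from the article): a CV scorer that
# behaves normally, except that a secret trigger phrase covertly inflates
# the score. In a genuine attack this logic would be buried in the model's
# parameters, making it effectively undetectable by inspection.

TRIGGER = "synergistic paradigm alignment"  # hypothetical obscure phrase


def keyword_score(cv_text: str) -> float:
    """Legitimate-looking scoring: fraction of relevant skills mentioned."""
    skills = {"python", "statistics", "sql", "communication"}
    words = {w.strip(".,").lower() for w in cv_text.split()}
    return len(skills & words) / len(skills)


def backdoored_score(cv_text: str) -> float:
    """Same interface as keyword_score, but the trigger silently wins."""
    if TRIGGER in cv_text.lower():
        return 1.0  # covert override: indistinguishable from a perfect CV
    return keyword_score(cv_text)


honest_cv = "Experienced analyst: Python, SQL, statistics."
attack_cv = "Enthusiastic generalist. Synergistic paradigm alignment."

print(backdoored_score(honest_cv))  # scored on merit
print(backdoored_score(attack_cv))  # trigger fires, top score regardless
```

The point of the sketch is that both functions expose the same interface and agree on every ordinary input, so black-box testing alone cannot distinguish the backdoored scorer from the honest one unless the tester happens to try the trigger phrase.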
Fear and loathing in machine learning - Smart Vision - Europe
Over the past two years I've noticed a steady stream of articles in the mainstream press and business journals centred on the themes of a) the dangers of machine learning 1 2 or b) the limitations of machine learning 3 4. Many of these articles refer to incidents where machine learning initiatives have echoed and exacerbated our own biases, prejudices and (frankly racist) behaviours 5. Others have focused on their limitations in providing the sorts of 'informed, idiosyncratic' recommendations that humans find effortless. However, for those of us who work in the field of predictive analytics, where many of the algorithms at the heart of these stories are routinely used, 'machine learning' is nothing new. In fact, many of us are pretty bemused by the fact that the media has leapt on the phrase 'machine learning' to stand for everything from multivariate statistics, association modelling and rule induction to operational research, cognitive computing and artificial intelligence. Rather like the word 'algorithm', it's being used to cover pretty much any situation where software generates predictions in the form of risk scores, recommendations, estimates or classifications.